## Summary

This PR introduces an end-to-end implementation of Step3.5 Multi-Token Prediction (MTP) in llama.cpp, covering model conversion, loading, runtime graph execution, speculative decoding, server integration, and quantization compatibility. The current implementation is intentionally specialized for Step3.5 and focuses on enabling a complete and usable MTP pipeline within the existing architecture. It allows early experimentation with Step3.5-style MTP and provides a concrete reference for evaluating performance and behavior in practice.

This PR does not attempt to define a generalized MTP abstraction. Instead, it provides a concrete baseline that can be used for experimentation, benchmarking, and informing future design directions toward a more general solution.
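For readers new to MTP-style decoding, the following is a minimal, self-contained sketch of the generic draft-and-verify loop this kind of pipeline follows. It is an illustration under my own simplifications: `Model`, `speculative_step`, and the greedy matching rule are hypothetical stand-ins, not the PR's actual llama.cpp code.

```cpp
#include <cstdio>
#include <functional>
#include <vector>

using Token = int;
// A "model" here is just greedy next-token prediction over a context.
using Model = std::function<Token(const std::vector<Token> &)>;

// One speculative step: the cheap MTP head drafts K tokens, the main model
// verifies them, and generation keeps the longest accepted prefix plus the
// main model's own token at the first mismatch.
std::vector<Token> speculative_step(const Model & main_model,
                                    const Model & mtp_head,
                                    std::vector<Token> & ctx, int K) {
    // 1) Draft K tokens with the MTP head.
    std::vector<Token> draft;
    std::vector<Token> tmp = ctx;
    for (int i = 0; i < K; i++) {
        Token t = mtp_head(tmp);
        draft.push_back(t);
        tmp.push_back(t);
    }
    // 2) Verify. In the real flow all K positions are scored by the main
    //    model in a single batch, which is what makes drafting pay off;
    //    here it is simulated token by token for clarity.
    std::vector<Token> committed;
    for (int i = 0; i < K; i++) {
        Token verified = main_model(ctx);
        ctx.push_back(verified);
        committed.push_back(verified);
        if (verified != draft[i]) break; // first mismatch ends the step
    }
    return committed;
}

int main() {
    // Toy models that agree with each other, so every draft is accepted.
    Model main_model = [](const std::vector<Token> & c) { return Token(c.size() % 7); };
    Model mtp_head   = main_model;
    std::vector<Token> ctx = {1, 2, 3};
    std::printf("committed %zu tokens\n",
                speculative_step(main_model, mtp_head, ctx, 3).size());
}
```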
## Implemented

- Conversion: the `nextn/` shared-head tensors needed by the current runtime path are converted and registered as proper per-layer tensors, currently keeping the first MTP layer only.
- Loading: the MTP tensors are loaded only when `-mtp` is enabled, instead of always pulling the extra tensors into the runtime (see the sketch after this list).
- Graph execution: the MTP graph is built through the `iswa` builder, while layer construction is refactored so the normal decoder path and the MTP path share the same core logic.
- Quantization: the MTP tensors stay quantization-compatible, including the `imatrix` requirements for them.
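To make the loading behavior above concrete, here is a hypothetical sketch (not the PR's actual loader code) of gating optional per-layer MTP tensors behind a runtime flag, so the extra weights are only pulled in when `-mtp` is requested. The `nextn.` prefix and all names are illustrative assumptions.

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct Tensor { std::string name; /* data omitted */ };

struct ModelFile {
    std::map<std::string, Tensor> entries; // name -> tensor metadata
};

// Load the base decoder tensors always; pull in the per-layer MTP
// ("nextn") tensors only when mtp_enabled is true.
std::vector<Tensor> load_tensors(const ModelFile & file, bool mtp_enabled) {
    std::vector<Tensor> loaded;
    for (const auto & [name, t] : file.entries) {
        const bool is_mtp = name.rfind("nextn.", 0) == 0; // hypothetical naming
        if (is_mtp && !mtp_enabled) {
            continue; // skip the extra weights entirely
        }
        loaded.push_back(t);
    }
    return loaded;
}

int main() {
    ModelFile f;
    f.entries["blk.0.attn_q.weight"]        = {"blk.0.attn_q.weight"};
    f.entries["nextn.0.shared_head.weight"] = {"nextn.0.shared_head.weight"};
    std::printf("without -mtp: %zu tensors\n", load_tensors(f, false).size());
    std::printf("with    -mtp: %zu tensors\n", load_tensors(f, true).size());
}
```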
## Not Yet Completed

## Testing
These numbers are taken from the decode-speed line in the server log.

For these runs, `-mtp` enables the Step3.5 MTP path introduced in this PR: it loads the MTP weights and makes the server run the MTP speculative decoding flow measured below.
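As a back-of-envelope way to read these benchmarks (my own model, not a number from the PR): if the draft length is K and each drafted token independently matches the main model with probability a, one verify pass commits on average (1 - a^(K+1)) / (1 - a) tokens, since the verifier always contributes one token even when every draft is rejected.

```cpp
#include <cmath>
#include <cstdio>

// Expected tokens committed per verify pass under an independent
// per-token acceptance model (an assumption of this write-up).
double expected_tokens_per_pass(int K, double a) {
    if (a >= 1.0) return K + 1.0;                      // all drafts accepted
    return (1.0 - std::pow(a, K + 1)) / (1.0 - a);     // geometric-series sum
}

int main() {
    for (double a : {0.5, 0.7, 0.9}) {
        std::printf("K=3, accept=%.1f -> %.2f tokens/pass\n",
                    a, expected_tokens_per_pass(3, a));
    }
}
```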
### Decode Speed on 8xH200

- Model weights: `fp16`, KV cache: `fp16`
- Model weights: `fp16`, KV cache: `q8_0`
- Model weights: `fp16`, KV cache: `q4_0`
- Model weights: `Q4_K_S`, KV cache: `fp16`
- Model weights: `IQ4_XS`, KV cache: `fp16`

### Decode Speed on Mac Studio
Performance on Mac Studio is currently weaker than I expected. Profiling so far suggests that the main-model verify path on the Metal backend is relatively expensive. If anyone has suggestions or useful leads here, feedback would be very welcome.
```shell
nohup ./llama-server \
  -m ../../../step3p5_flash_IQ4_XS.gguf \
  --host 0.0.0.0 --port 8080 \
  -c 131072 \
  -ctk f16 -ctv f16 \
  -np 1 -cb \
  -b 4096 -ub 2048 \
  --jinja \
  --reasoning-format none \
  -mtp --draft 3 -cram 0 \
  > ./llama-server_mtp.log 2>&1 &
```

- Model weights: `IQ4_XS`, KV cache: `fp16`
- Model weights: `Q4_K_S`, KV cache: `fp16`, MTP KV cache: `q4_0`

### Decode Speed on DGX Spark
- Model weights: `IQ4_XS`, KV cache: `fp16`
- Model weights: `Q4_K_S`, KV cache: `q8_0`, MTP KV cache: `q4_0`

## Related Work